Although user satisfaction is widely used by researchers and practitioners to evaluate information system success, important issues related to its meaning and measurement across population subgroups have not been adequately resolved. To be most useful in decision-making, instruments like end-user computing satisfaction (EUCS), which are designed to evaluate system success, should be robust. That is, they should enable comparisons by providing equivalent measurement across diverse samples that represent the variety of conditions or population subgroups present in organizations. Using a sample of 1,166 responses, the EUCS instrument is tested for measurement invariance across four dimensions--respondent positions, types of application, hardware platforms, and modes of development. While the results suggest that the meaning of user satisfaction is context sensitive and differs across population subgroups, the 12 measurement items are invariant across all four dimensions. The 12-item summed scale enables researchers or practitioners to compare EUCS scores across the instrument's originally intended universe of applicability.
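The 12-item summed scale mentioned above can be sketched in a few lines; a minimal illustration, assuming the five-component grouping of the EUCS instrument (content, accuracy, format, ease of use, timeliness) with hypothetical item keys and a 1-5 response scale, neither of which is specified in the abstract:

```python
# Illustrative sketch of a summed EUCS score. The five component groupings
# follow the EUCS instrument; the item keys ("c1", "a1", ...) and the 1-5
# response scale are assumptions made for this example.

EUCS_COMPONENTS = {
    "content": ["c1", "c2", "c3", "c4"],
    "accuracy": ["a1", "a2"],
    "format": ["f1", "f2"],
    "ease_of_use": ["e1", "e2"],
    "timeliness": ["t1", "t2"],
}

def eucs_summed_score(responses: dict) -> int:
    """Sum all 12 item responses (each assumed to be on a 1-5 scale)."""
    items = [item for group in EUCS_COMPONENTS.values() for item in group]
    missing = [i for i in items if i not in responses]
    if missing:
        raise ValueError(f"missing responses for items: {missing}")
    return sum(responses[i] for i in items)

# A respondent answering 4 on every item scores 48 (possible range: 12-60).
example = {item: 4 for group in EUCS_COMPONENTS.values() for item in group}
print(eucs_summed_score(example))  # → 48
```

Because the abstract reports the 12 items to be invariant across the four dimensions studied, such summed scores are what the authors suggest can be compared across respondent positions, application types, platforms, and development modes.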
The structure and dimensionality of the user information satisfaction (UIS) construct is an important theoretical issue that has received considerable attention. Building upon the work of Bailey and Pearson (1983), Ives et al. (1983) conduct an exploratory factor analysis and recommend a 13-item instrument (two indicators per item) for measuring user information satisfaction. Ives et al. also contend that UIS comprises three component measures (information product, EDP staff and services, and user knowledge or involvement). In a replication using exploratory techniques, Baroudi and Orlikowski (1988) confirm the three-factor structure and support the diagnostic utility of the three-factor model. Other researchers have suggested a need for caution in using the UIS instrument as a single measure of user satisfaction; they contend that the instrument's three components measure quite different dimensions whose antecedents and consequences should be studied separately. The acceptance of UIS as a standardized instrument requires confirmation that it explains and measures the user information satisfaction construct and its components. Based on a sample of 224 respondents, this research uses confirmatory factor analysis (LISREL) to test alternative models of underlying factor structure and assess the reliability and validity of factors and items. The results provide support for a revised UIS model with four first-order factors and one second-order (higher-order) factor. To cross-validate these results, the authors reexamine two data sets, including the original Baroudi and Orlikowski data, to assess the revised UIS model. The results show that the revised model provides better model-data fit in all three data sets. Thus, the evidence supports the use of: (1) the 13-item instrument as a measure of an overall UIS; and (2) four component factors for explaining the UIS construct.
This article presents a confirmatory factor analysis of the end-user computing satisfaction index. User satisfaction is often thought to be the most important measure of success for information systems. This study uses LISREL VII to test hypotheses against sample data. The sample data were collected from 409 computer end-users from 18 different organizations using 139 computer applications, including accounts payable, financial planning, inventory, and computer-aided design. The data were statistically analyzed against a variety of alternative factor models.
This article contrasts traditional versus end-user computing environments and reports on the development of an instrument that merges ease of use and information product items to measure the satisfaction of users who directly interact with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction -- content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented. Reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.
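Reliability of a multi-item scale such as the one described above is commonly assessed with Cronbach's alpha. A minimal stdlib-only sketch follows; the formula is standard, but the response data here are made up for illustration and are not the study's sample:

```python
# Cronbach's alpha for a multi-item scale, computed from per-item score
# lists. Uses sample variance throughout (statistics.variance).
from statistics import variance

def cronbach_alpha(item_scores):
    """alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals),
    where k is the number of items and each inner list holds one item's
    scores across all respondents."""
    k = len(item_scores)
    n = len(item_scores[0])
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[r] for item in item_scores) for r in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical responses: three items, four respondents, 1-5 scale.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.82
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the kind of evidence instrument-development studies like this one report per component and overall.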
Computer-integrated manufacturing (CIM) could well become the most important application of information technology for achieving competitive advantage. Recent advances in manufacturing and information technologies present promising new strategic alternatives for designing business systems. CIM enables firms to respond quickly to market changes, achieve economies of scope, and manage the complexity of a multi-product environment. Attaining the strategic benefits of CIM requires decision support to integrate marketing with design, design with manufacturing, and manufacturing with strategic positioning. This emerging technology requires a partnership of executives in engineering, manufacturing, marketing, and MIS who share a common vision of how CIM makes possible new approaches to designing business systems.
The credibility syndrome is a special case of MIS development failure with a distinctive set of symptoms and causes. In credibility problems, the directors of data processing -- their management approaches and systems development philosophies -- play an important role.